An Improved Affine Equivalence Algorithm for Random Permutations
In this paper we study the affine equivalence problem, where given two functions $F, G$, the goal is to determine whether there exist invertible affine transformations $A_1, A_2$ such that $G = A_2 \circ F \circ A_1$. Algorithms for this problem have several well-known applications in the design and analysis of S-boxes, cryptanalysis of white-box ciphers and breaking a generalized Even-Mansour scheme.
We describe a new algorithm for the affine equivalence problem and focus on the variant where $F, G$ are permutations over $n$-bit words, as it has the widest applicability. The complexity of our algorithm is about $n^3 2^n$ bit operations with very high probability whenever $F$ (or $G$) is a random permutation. This improves upon the best known algorithms for this problem, published by Biryukov et al. at EUROCRYPT 2003.
Our algorithm is based on a new structure (called a \emph{rank table}) which is used to analyze particular algebraic properties of a function that remain invariant under invertible affine transformations. Besides its standard application in our new algorithm, the rank table is of independent interest and we discuss several of its additional potential applications.
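As a naive reference point for the problem statement above, the Python sketch below (all names and the tiny parameter $n = 3$ are chosen for illustration, not taken from the paper) enumerates all 1344 invertible affine maps on $GF(2)^3$ and, for each candidate inner map $A_1$, checks whether the uniquely forced outer map $A_2$ is also affine. This exhaustive approach scales exponentially in the number of affine maps and is only meant to make the problem concrete.

```python
from itertools import product

n = 3
N = 1 << n  # 2^n = 8 inputs

def parity(v):
    return bin(v).count("1") & 1

def affine_table(A, b):
    # A: tuple of n row bitmasks; lookup table of x -> A*x xor b over GF(2)
    return tuple(sum(parity(A[i] & x) << i for i in range(n)) ^ b for x in range(N))

# Enumerate every invertible affine map on GF(2)^3 as a lookup table (1344 maps).
affine_maps = []
for A in product(range(N), repeat=n):
    t = affine_table(A, 0)
    if len(set(t)) == N:          # the matrix A is invertible
        for b in range(N):
            affine_maps.append(tuple(y ^ b for y in t))

def affine_equivalent(F, G):
    """Search for affine A1, A2 with G = A2 o F o A1 (F, G permutations)."""
    table_set = set(affine_maps)
    for A1 in affine_maps:
        # A2 is forced: it must send F(A1(x)) to G(x) for every x
        A2 = [0] * N
        for x in range(N):
            A2[F[A1[x]]] = G[x]
        if tuple(A2) in table_set:
            return A1, tuple(A2)
    return None
```

Note that for $n = 3$ the permutation $(0,1,2,3,4,5,7,6)$ is not affine, so it is not affine-equivalent to the identity; this gives a simple negative instance.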
An Algorithmic Framework for the Generalized Birthday Problem
The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H$ (whose specification depends on the underlying problem) and an integer $K$. The goal is to find $K$ distinct inputs to $H$ (denoted by $x_1, \ldots, x_K$) such that $H(x_1) \oplus \cdots \oplus H(x_K) = 0$. Wagner's $K$-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N = 2^n$ and $n$ is the bit-length of the $H$-values). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the $K$-tree algorithm for $K$ which is not a power of 2.
In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikolić and Sasaki and also by Biryukov and Khovratovich), applicable for a large range of parameters.
Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for several such values of $K$ we present algorithms with the same time complexities as the $K$-tree algorithm, but with significantly reduced memory complexities. This gives the first significant improvement over the $K$-tree algorithm for small $K$.
Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state-of-the-art.
Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
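As a concrete illustration of the problem statement, here is a toy Python implementation of Wagner's K-tree algorithm for K = 4 in the XOR setting (the function H, the 24-bit output size and the list sizes are invented for the demo): it clears a third of the output bits at the first merge level and the remaining bits at the second.

```python
import hashlib

n = 24          # H outputs n-bit values
m = n // 3      # bits cleared per merge level: n / (log K + 1) for K = 4

def H(x):
    # a random-looking stand-in function, truncated SHA-256
    d = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") >> (256 - n)

def merge(L1, L2, mask):
    """Join entries whose values agree on `mask`; keep xors of the values."""
    index = {}
    for xs, v in L1:
        index.setdefault(v & mask, []).append((xs, v))
    out = []
    for xs, v in L2:
        for xs1, v1 in index.get(v & mask, []):
            out.append((xs1 + xs, v1 ^ v))
    return out

size = 1 << (m + 1)   # slightly oversized lists so a solution survives
lists = [[((x,), H(x)) for x in range(i * size, (i + 1) * size)]
         for i in range(4)]

low = (1 << m) - 1
L12 = merge(lists[0], lists[1], low)   # pairs agreeing on the low m bits
L34 = merge(lists[2], lists[3], low)
solutions = [xs for xs, v in merge(L12, L34, (1 << n) - 1) if v == 0]
```

Each surviving tuple XORs to zero under H; the lists are drawn from disjoint input ranges so the four inputs are automatically distinct.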
Side Channel Cube Attacks on Block Ciphers
In this paper we formalize the notion of {\it leakage attacks} on
iterated block ciphers, in which the attacker can find (via
physical probing, power measurement, or any other type of side
channel) one bit of information about the intermediate state of
the encryption after each round. Since bits computed during the
early rounds can be typically represented by low degree
multivariate polynomials, cube attacks seem to be an ideal generic
key recovery technique in these situations. However, the original
cube attack requires extremely clean data, whereas the information
provided by side channel attacks can be quite noisy. To address
this problem, we develop a new variant of cube attack which can
tolerate considerable levels of noise (affecting more than 11\% of
the leaked bits in practical scenarios). Finally, we demonstrate
our approach by describing efficient leakage attacks on two of the
best known block ciphers, AES and SERPENT, recovering their full
keys.
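The core mechanism of a (noise-free) cube attack can be shown on a toy scale. In the Python sketch below, the master polynomial p, the variable split and the chosen cube are all invented for illustration: XOR-summing the output over all assignments of the cube variables isolates a linear superpoly in the key bits, which the attacker interpolates offline and then uses online as one linear equation on the key.

```python
def p(v, k):
    # invented master polynomial over GF(2); public bits v, secret bits k
    v1, v2, v3 = v
    k1, k2 = k
    return (v1 & v2 & k1) ^ (v1 & v2 & k2) ^ (v1 & v2 & v3) ^ (v3 & k1 & k2) ^ v2 ^ k1

def cube_sum(f, key, cube, n_pub=3):
    # XOR f over all assignments of the cube variables; other public bits = 0
    acc = 0
    for assign in range(1 << len(cube)):
        v = [0] * n_pub
        for j, pos in enumerate(cube):
            v[pos] = (assign >> j) & 1
        acc ^= f(tuple(v), key)
    return acc

cube = (0, 1)   # summing over v1, v2 isolates the superpoly k1 ^ k2

# Offline phase: interpolate the linear superpoly by querying chosen keys.
const = cube_sum(p, (0, 0), cube)
coeffs = [cube_sum(p, (1, 0), cube) ^ const,
          cube_sum(p, (0, 1), cube) ^ const]

# Online phase: one cube sum under the unknown key yields one linear equation.
```

For this particular p the superpoly of the cube {v1, v2} is k1 ^ k2: only the terms divisible by v1·v2 survive the summation, and all other terms cancel because they are summed an even number of times.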
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2
At EUROCRYPT~2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in $2^{40}$ time using 44 GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security.
While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45.1}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given fewer consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block).
In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2.
For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13}$ from 44 GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in an average time of 2.5~hours on a modern laptop.
For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack when fewer consecutive keystream bits are available; in particular, for sufficiently small $\ell$ our attack remains faster than brute force, while the previous one is not. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of 512 from 32 GiB to 64 MiB with no time complexity penalty.
Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
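The k-XOR problem mentioned above has, for k = 3, a simple generic baseline: index one list by value and scan pairs from the other two. The parameters below are illustrative.

```python
import random

random.seed(1)
n = 20
size = 1 << (n // 2 + 1)   # ~2^(n/2 + 1) words per list: several solutions expected

A = [random.getrandbits(n) for _ in range(size)]
B = [random.getrandbits(n) for _ in range(size)]
C = set(random.getrandbits(n) for _ in range(size))

def three_xor(A, B, C):
    """Find (a, b, c) in A x B x C with a ^ b ^ c == 0, or None."""
    for a in A:
        for b in B:
            if a ^ b in C:          # c is forced to equal a ^ b
                return a, b, a ^ b
    return None
```

This quadratic scan is the naive baseline; the point of dedicated 3-XOR algorithms is to beat it.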
An Improved Algebraic Attack on Hamsi-256
Hamsi is one of the second-stage candidates in NIST's SHA-3
competition. The only previous attack on this hash function was a
very marginal attack on its 256-bit version published by Thomas Fuhr
at Asiacrypt 2010, which is better than generic attacks only for
very short messages of a small number of 32-bit blocks, and is only
marginally faster than a straightforward exhaustive search attack. In
this paper we describe a different algebraic attack which is less
marginal: It is better than the best known generic attack for all
practical message sizes (up to gigabytes), and it outperforms
exhaustive search by a significant factor. The attack is based
on the observation that in order to discard a possible second
preimage, it suffices to show that one of its hashed output bits is
wrong. Since the output bits of the compression function of Hamsi-256
can be described by low degree polynomials, it is actually faster to
compute a small number of output bits by a fast polynomial evaluation
technique rather than via the official algorithm.
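The filtering idea can be demonstrated on a toy scale. In the Python sketch below (the quadratic "compression function" is invented for the demo), each output bit is a cheap low-degree polynomial, so a candidate second preimage is rejected after evaluating a single output bit about half the time, and a full evaluation is spent only on the survivors.

```python
def out_bit(i, x):
    # bit i of an invented 8-bit compression function on a 16-bit input:
    # a quadratic polynomial over GF(2) in the input bits
    b = [(x >> j) & 1 for j in range(16)]
    return b[i] ^ (b[(i + 3) % 16] & b[(i + 7) % 16]) ^ b[(i + 11) % 16]

def full_out(x):
    # all 8 output bits: roughly 8x the work of a single bit
    return sum(out_bit(i, x) << i for i in range(8))

target_x = 0x1234
target = full_out(target_x)

full_evals, second = 0, None
for cand in range(1 << 16):
    if cand == target_x:
        continue
    if out_bit(0, cand) != (target & 1):    # cheap single-bit test
        continue                            # rejected without a full evaluation
    full_evals += 1
    if full_out(cand) == target:
        second = cand
        break
```

Since the toy function compresses 16 bits to 8, each output value has about 256 preimages, so a second preimage is found quickly, and roughly half of all candidates never incur a full evaluation.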
Fine-Grained Cryptanalysis: Tight Conditional Bounds for Dense k-SUM and k-XOR
An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of the order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis.
In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k = 3, 4, 5$. For $k > 5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or.
Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
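The dense regime can be made concrete for k = 3: with a list far longer than $N^{1/3}$, many triples sum to 0 modulo $N$, and a value-indexed scan finds one quickly. The parameters below are illustrative.

```python
import random

random.seed(7)
N = 1 << 20
vals = [random.randrange(N) for _ in range(4096)]   # far more than N^(1/3): dense

def three_sum_mod(vals, N):
    """Find indices i, j, k with vals[i] + vals[j] + vals[k] == 0 mod N."""
    index = {}
    for i, v in enumerate(vals):
        index.setdefault(v, []).append(i)
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):
            need = (-vals[i] - vals[j]) % N     # the third value is forced
            for k in index.get(need, []):
                if k != i and k != j:
                    return i, j, k
    return None
```

In this dense setting a pair completes to a solution with probability about 4096/N, so a solution appears after only a few hundred scanned pairs; the sparse regime addressed by the conjecture is what makes the problem hard.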
Locality-Preserving Hashing for Shifts with Connections to Cryptography
Can we sense our location in an unfamiliar environment by taking a
sublinear-size sample of our surroundings? Can we efficiently encrypt a message
that only someone physically close to us can decrypt? To solve this kind of
problems, we introduce and study a new type of hash functions for finding
shifts in sublinear time. A function $h$ is a
{\em locality-preserving hash function for shifts} (LPHS) if: (1) $h$
can be computed by (adaptively) querying $d$ bits of its input, and (2)
$h(x) = h(x \ll 1) + 1$ with high probability, where $x$ is random and
$\ll$ denotes a cyclic shift by one bit to the left. We make the following
contributions.
* Near-optimal LPHS via Distributed Discrete Log: We establish a general
two-way connection between LPHS and algorithms for distributed discrete
logarithm in the generic group model. Using such an algorithm of Dinur et al.
(Crypto 2018), we get LPHS with near-optimal error.
This gives an unusual example for the usefulness of group-based cryptography in
a post-quantum world. We extend the positive result to non-cyclic and
worst-case variants of LPHS.
* Multidimensional LPHS: We obtain positive and negative results for a
multidimensional extension of LPHS, making progress towards an optimal
2-dimensional LPHS.
* Applications: We demonstrate the usefulness of LPHS by presenting
cryptographic and algorithmic applications. In particular, we apply
multidimensional LPHS to obtain an efficient "packed" implementation of
homomorphic secret sharing and a sublinear-time implementation of
location-sensitive encryption whose decryption requires a significantly
overlapping view.
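Property (2) can be illustrated with a folklore, full-query construction (not the paper's): let h(x) be the index of the first occurrence of a fixed marker pattern in x. A cyclic left shift moves every occurrence one position to the left, so h(x) equals h of the shifted string plus one, except in rare boundary cases; the paper's challenge is achieving such a guarantee while reading only sublinearly many bits.

```python
import random

random.seed(3)
n, w = 4096, 8
marker = tuple((0xB5 >> i) & 1 for i in range(w))   # arbitrary fixed 8-bit pattern

def h(bits):
    # index of the first occurrence of the marker (reads all of `bits`)
    for i in range(n - w):
        if tuple(bits[i:i + w]) == marker:
            return i
    return -1   # marker absent

def shift_left(bits):
    # cyclic shift by one bit to the left
    return bits[1:] + bits[:1]

fails = 0
for _ in range(20):
    x = [random.getrandbits(1) for _ in range(n)]
    if h(x) != h(shift_left(x)) + 1:
        fails += 1   # boundary case: marker at index 0, wrapped, or absent
```

For random 4096-bit strings and an 8-bit marker, a failure requires the marker to sit at index 0 or to be absent, which happens with probability well below 1%.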
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
In this paper we show that a large class of diverse problems have a
bicomposite structure which makes it possible to solve them with a new
type of algorithm called {\it dissection}, which has much better
time/memory tradeoffs than previously known algorithms. A typical example is the
problem of finding the key of multiple encryption schemes with $r$ independent
$n$-bit keys. All the previous error-free attacks required time $T$ and
memory $M$ satisfying $TM = 2^{rn}$, and even if ``false negatives'' are allowed,
no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first
algorithm which never errs and finds all the possible keys with a smaller
product of $TM$, such as $2^{4n}$ time and $2^n$ memory for breaking
the sequential execution of $r=7$ block ciphers. The improvement ratio we obtain
increases in an unbounded way as $r$ increases, and if we allow algorithms
which can sometimes miss solutions, we can get even better tradeoffs by
combining our dissection technique with parallel collision search.
To demonstrate the generality of the new dissection technique, we show how
to use it in a generic way in order to improve rebound attacks on hash
functions and to solve with better time complexities (for small memory complexities)
hard combinatorial search problems, such as the well-known knapsack problem.
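The dissection idea for multiple encryption can be demonstrated at the smallest interesting size, r = 4: guess the internal state in the middle of the cascade for one plaintext, then meet in the middle on each half independently. In the Python sketch below the miniature cipher, the 5-bit keys and the 8-bit block are all invented for the demo.

```python
KBITS, MASK = 5, 0xFF

def E(k, x):
    # invertible toy round: affine map mod 256 after key mixing
    return ((x ^ k) * 5 + k) & MASK

def D(k, y):
    return (((y - k) * 205) & MASK) ^ k   # 205 = 5^-1 mod 256

def enc4(keys, x):
    for k in keys:
        x = E(k, x)
    return x

def dissect4(pairs):
    (p0, c0), rest = pairs[0], pairs[1:]
    for mid in range(256):                        # guess the state after 2 rounds
        table = {}
        for k1 in range(1 << KBITS):              # top half: p0 -> mid
            for k2 in range(1 << KBITS):
                if E(k2, E(k1, p0)) == mid:
                    sig = tuple(E(k2, E(k1, p)) for p, _ in rest)
                    table.setdefault(sig, []).append((k1, k2))
        for k4 in range(1 << KBITS):              # bottom half: c0 -> mid
            for k3 in range(1 << KBITS):
                if D(k3, D(k4, c0)) == mid:
                    sig = tuple(D(k3, D(k4, c)) for _, c in rest)
                    for k1, k2 in table.get(sig, []):
                        return (k1, k2, k3, k4)
    return None

secret = (11, 29, 7, 30)                          # four independent 5-bit keys
pairs = [(p, enc4(secret, p)) for p in (3, 58, 120, 201)]
recovered = dissect4(pairs)
```

For each guessed middle value only the small set of half-keys consistent with the first pair is stored, which is what keeps the memory far below the full key space; the remaining plaintext/ciphertext pairs filter out wrong candidates.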
Memory-Efficient Algorithms for Finding Needles in Haystacks
One of the most common tasks in cryptography and cryptanalysis is to find
some interesting event (a needle) in an exponentially large collection (haystack) of
possible events, or to demonstrate that no such event is likely to
exist. In particular, we are interested in finding needles which are defined as events that
happen with an unusually high probability of $p$ in a haystack which is an almost uniform
distribution on $2^n$ possible events. When the search algorithm can
only sample values from this distribution, the best known time/memory
tradeoff for finding such an event requires $O(1/(Mp^2))$ time given $O(M)$
memory.
In this paper we develop much faster needle searching algorithms in the common
cryptographic setting in which the distribution is defined
by applying some deterministic function to random inputs.
Such a distribution can be modelled by a random directed graph with $2^n$ vertices in
which almost all the vertices have a small number of predecessors, while
the vertex we are looking for has an unusually large number of predecessors.
When we are given only a constant amount of memory, we propose a new search methodology which we call
\textbf{NestedRho}. As $p$ increases, such random graphs undergo several subtle phase transitions,
and thus the log-log dependence of the time complexity on $p$
becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the
time complexity of the best previous algorithm in the full range of $p$, and in particular
it improves the previous time complexity by a significant factor for a large part of this range. When we are given more memory, we show how to combine the \textbf{NestedRho} technique with the parallel collision
search technique in order to further reduce its time complexity. Finally, we show how to apply our new search
technique to more complicated distributions with multiple peaks when we want to find all the peaks whose
probabilities are higher than a given threshold.
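The constant-memory primitive that \textbf{NestedRho} builds on is rho-style cycle finding, which locates a collision of a random-looking function using only a few words of memory. The function f below is an invented stand-in.

```python
import hashlib

def f(x):
    # random-looking function on 16-bit values: truncated SHA-256
    return int.from_bytes(hashlib.sha256(x.to_bytes(4, "big")).digest()[:2], "big")

def rho_collision(start):
    """Find a != b with f(a) == f(b) using O(1) memory (Floyd's rho)."""
    tortoise, hare = f(start), f(f(start))
    while tortoise != hare:                  # phase 1: find a meeting point
        tortoise, hare = f(tortoise), f(f(hare))
    a, b = start, tortoise                   # phase 2: walk to the cycle entry;
    while f(a) != f(b):                      # the step just before it collides
        a, b = f(a), f(b)
    return a, b
```

If the chosen start already lies on the cycle the returned pair can be degenerate (a == b), so in practice one retries from a different start; the expected running time is on the order of the square root of the range size.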